The Unbearable Lightness of the Economics-Made-Fun Genre[1]

DRAFT: PLEASE DO NOT CITE WITHOUT AUTHOR’S PERMISSION

[1] Presented at the Erasmus Institute for Philosophy and Economics Symposium on the “Economics Made Fun” Genre, December 10, 2010, Rotterdam, Netherlands.
Abstract
Several commentators have argued that the Economics-Made-Fun (“EMF”) genre contains very little actual economics. As such, it would seem that criticisms of EMF do not apply to economics more broadly. In this paper I take a contrary view, arguing that, in fact, at a deep conceptual level, the engine of EMF analyses is precisely the engine of mainstream economics. Specifically, I argue that both EMF and mainstream economics rest on a conceptual foundation known as the Principle of the Substitution of Similars (“PSS”). Understanding how the PSS leads EMF practitioners to make claims well beyond what is warranted by their analysis also offers insight into how the PSS leaves mainstream economists in danger of overestimating the power and scope of their analyses. I explore the consequences of such problems through an example of economic analysis of the U.S. housing market in the lead-up to the recent financial crisis.

Keywords: Methodology, Popular Economics, William Stanley Jevons, Ontology, Anthropology of Finance

JEL Codes: B13, B41, Z13

In a famous scene from Monty Python and the Holy Grail, a village wise-man named Bedevere is called upon to adjudicate his fellow villagers’ claims that a woman they have brought before him is a witch. After some thought and a bit of assistance from King Arthur, Bedevere devises a test, and soon the villagers are gathered before an enormous balancing scale containing the woman and a duck, certain that they are about to receive definitive verification or refutation of their conjecture. For fans of Monty Python’s brand of humor, this scene bears the hallmark of what made Monty Python great: the juxtaposition of normalcy and absurdity. It is funny, in part, because the characters are completely committed to the logically sound and patently absurd elements of the situation in equal measure. This both highlights the absurdity and pokes fun at the human tendency to take our favored norms and beliefs, no matter how strange, as self-evidently right and true. When we laugh at Bedevere and the villagers, we are also laughing at our own all-too-human nature.

Lately, there has been a version of this comedic trope propagating through the popular non-fiction bookshelves in the guise of “Economics-Made-Fun” writing. Like Bedevere, its authors claim to be revealing hidden aspects of the universe by interpreting superficial similarities between apparently unlike objects within the framework of a set of absurd assumptions to produce startling conclusions. For Bedevere and the villagers, the fact of the similar weights of a duck and a woman means a witch roams among them. For Steven Levitt and Stephen Dubner (2005), the fact that some Sumo wrestlers and some teachers sometimes cheat points to some deep underlying connection between the two.

EMF analyses, though, are marketed not as absurd comedy but rather as mostly straight-faced social analysis. The genre’s practitioners do sometimes present their work as winking or tongue-in-cheek, but not in a sense that is meant to undermine the impression that their conclusions represent genuine and profound insights. For the most part, the work is meant to be a version of real economics and is put forth by unquestionably real economists, some of whom occupy prestigious chairs at elite institutions and regularly publish in the discipline’s top journals.
Like many other critics of the EMF genre, I find the characterization of EMF work as even semi-serious social science to be misleading at best. Unlike other critics, however, I will not be taking issue here with the ways in which EMF work falls short of (or falls completely outside) the standards of contemporary economics.[2] Rather, I will be exploring the possibility that the central analytic principle of EMF work fits quite comfortably within the norms and standards of contemporary mainstream economics, and that by recognizing this we can better understand some of the serious methodological problems within mainstream economics that have led to its current state of turmoil.

[2] This point has been made most forcefully and comprehensively by DiNardo (2007).

The analytic principle I have in mind is a particularly extreme (and logically flawed) version of metaphorical explanation called “The Principle of the Substitution of Similars,” which was first introduced into economics by the late 19th-century economist William Stanley Jevons. Put briefly, the PSS holds that if two objects are even superficially similar, they can be presumed to be identical with respect to certain (arbitrarily selected) essential features. I will argue below that the PSS is an important generator of many of the surprising (and unsupported) conclusions of EMF work, but that it is also a widely used generator of results in mainstream economics, with results similar to those we encounter in EMF analyses. Worryingly, however, the version of the PSS utilized in mainstream economics is more sophisticated than that employed in EMF work, and so the faultiness of its procedure is more difficult to detect—not least because the currently accepted standards of model assessment in economics are not built to detect it. The result is that we have a strain of rot at the core of our methodology that has been doing damage for a long time, and continues to do so largely unabated. The solution to this problem must begin with identifying the exact nature of the rot in order to devise a targeted response. Since EMF work employs the PSS in an exaggerated manner, it is an ideal laboratory for this exploration.

The paper will proceed in three sections. In section I, I will discuss the standard analytic trope of EMF analysis and show how it produces ostensibly legitimate but actually unsupported results. In section II, I will argue that Jevons’ Principle of the Substitution of Similars bears a strong resemblance to the standard trope of EMF analysis and can be helpful in understanding the relationship between EMF analysis and ostensibly legitimate economics. In section III, I present the more sophisticated version of the PSS utilized by current mainstream economics and argue—using an example from the literature on the housing market in the lead-up to the financial crisis—that its use can lead to problems similar to those encountered in EMF work. I conclude by suggesting that judicious use of interpretive methods can protect economics against the nefarious influence of the PSS.

I. The Standard Trope of the Economics-Made-Fun Genre

Although the EMF genre has expanded prodigiously since the publication of Freakonomics, the subtitle of that seminal work—A rogue economist explores the hidden side of everything—still captures the genre’s ethos quite well.
The uncovering of previously hidden truths using economics in an unexpected way is the essence of the genre’s self-understanding. I will argue in this section, however, that close inspection of EMF analyses reveals that the conclusions reached are generally not hidden truths uncovered by economic reasoning, but rather rational reconstructions of the phenomena under study that assume a deep structure (a.k.a. hidden order) binding the phenomena together. Put another way, EMF authors are not discovering hidden truths, but rather “discovering” their own assumptions and reporting them as surprising insights.

The Freakonomics chapter “What do schoolteachers and sumo wrestlers have in common?” provides a good example. The chapter is ostensibly about cheating. In it, the authors present a very broad picture of cheating—including everything from steroid use by athletes to a third grader copying from another student—and then recount several stories that are intended to be understood as instances of cheating. The story of the cheating teachers centers around the Chicago public school system’s experience with “high stakes” testing—i.e., standardized tests on which students must achieve a passing score to advance to the next grade level, and for which teachers are rewarded for good performance in their classrooms. In a study of several years of student test scores (and of a re-test of a smaller sample of classrooms), Levitt and his co-author Brian Jacob found evidence that some teachers had altered students’ test responses after the fact to raise their scores (Jacob and Levitt 2003a, 2003b). The story of the Sumo wrestlers centers around some seemingly suspicious trends in match performance. Specifically, Levitt and his co-author Mark Duggan examined matches in which a win was extremely important for one wrestler but not for the other, and found that the wrestlers in need of a win performed much better in those matches than they did under similar circumstances in non-crucial matches (Duggan and Levitt 2002).

Levitt and Dubner suggest that the facts reviewed above are evidence that teachers and Sumo wrestlers have something in common.[3] Before examining this claim in detail, we need first to make it more explicit. To dispense with a minimal reading of the title, we can simply note that there is no question that Sumo wrestlers and teachers have something in common. Any two things have something in common.[4] But the authors’ suggestion is clearly something more specific and contentious than this. As the stated intent of their book is to “explore[ ] the hidden side of everything,” it seems appropriate to interpret the title of the chapter as a claim that there is a hidden connection between Sumo wrestlers and teachers that the material covered in the chapter will expose.

[3] This is suggested by the title of their chapter, though they do not explicitly state this claim anywhere in the chapter.
[4] For example, sneezes and the Crimean War share the common trait that both can be the subjects of sentences.

To understand what would be necessary to make such a claim plausible, it is helpful to consider a literal reading of the stories and then trace the moves required to transform this reading into evidence of a hidden connection. What the stories indicate, literally, is that some members of a group of Chicago public school teachers and some Sumo wrestlers, under a specific set of conditions, engaged in behavior that the authors believe was a violation of a set of rules the
agents recognized as applying to them. If we give the authors the benefit of the doubt regarding their claims of cheating, then we could say that some Chicago teachers and some Sumo wrestlers cheated under the specific circumstances reviewed in the stories.[5] The connection, so far, is a superficial similarity between an action taken by some of the wrestlers and some of the teachers.

[5] This is not a trivial concession, and DiNardo (2007) offers compelling reasons against making it (with respect to the finding of teacher cheating as well as many of the other findings in Freakonomics ostensibly supported by rigorous statistical analysis). I make the concession only to indicate that the points I make here are not dependent upon a critique of the authors’ empirical testing methods. If the concession is not made, then my argument in this section would hold even more strongly.

The authors’ transformation of this superficial connection between actions into a deep, hidden connection between people unfolds in two stages. First, they recast cheating (within whatever context it is encountered) as a uniform phenomenon: namely, as an outcome of incentive processing. The substance of the recasting involves first pointing out how important incentives are to our decision making, next describing various examples of behavior that would be commonly understood as cheating using the vocabulary of incentives, and finally concluding that “[c]heating is a primordial economic act: getting more for less” (Levitt and Dubner 2005, p. 25). This immediately changes the superficial similarity between the Sumo wrestlers’ and teachers’ cheating into a deeper one: they are not merely engaging in activities that share some attributes but, rather, are doing the same thing. To differentiate this meaning of cheating from the colloquial meaning (which is more capacious and nuanced), I will call the former “freakcheating.”

The second stage in the transformation involves recasting freakcheating as the resultant of “modular incentive processing activities” (this is my term, not Levitt and Dubner’s). What I mean by this will be most easily explained by tracing out what Levitt and Dubner do to connect the uniform act of cheating to a kind of uniformity among cheaters. The first thing to note is that the uniformity of the act of freakcheating is immediately tied to a uniformity of internal process: it involves the processing of incentives by the individuals generating the activities. So the fact that wrestlers and teachers freakcheat tells us not merely that they have done the same thing, but that they have both done so as a result of the same type of action: processing incentives. But this is still not enough to support the claim of a deep, hidden similarity between the wrestlers and the teachers. For that we need the incentive processing activity undertaken by the two groups to be deeply similar as well—otherwise the appearance of freakcheating in both contexts could just have been coincidence. The authors do not explicitly argue for this last piece of the puzzle, leaving the reader to make the connection him/herself (or to reject it).
But for those who are familiar with economic reasoning, the implication is fairly clear: if one holds all other differences equal, one can see that the offending teachers and offending wrestlers identically process similar circumstances to produce freakcheating. And this is where the notion of modularity is important. One way to understand this final step (and, I would suggest, the way that best fits Levitt and Dubner’s meaning) is to imagine the wrestlers and teachers as bundles of incentive processing modules, where the actions of each module are analytically separable from all others. As such, although the teachers and wrestlers differ in many ways, and although their contexts differ in many ways, they share a particular element whose functioning manifests itself empirically if only we understand how to recognize it. This would allow us to interpret their freakcheating as evidence that each contains the module that processes similar antecedent freakcheating factors (i.e., high stakes testing and crucial matches, respectively) identically. And this would, indeed, indicate that we had uncovered a previously hidden substantial connection.

In the foregoing description, for ease of exposition, I began with freakcheating and ended with the ontological position regarding action as the resultant of modular incentive processors. This is the order in which the elements of Levitt and Dubner’s analytic process unfold to the reader. But from the point of view of the practitioner, the process unfolds in the opposite direction. Levitt and Dubner began with an ontological presumption, and it was within that ontology that their observations of the Chicago teachers and Sumo wrestlers acquired the meanings proposed in the chapter. This process can be stated in general form as follows:

1. Begin with an unconventional ontological framework (e.g., the world is populated by a complex of separable incentives, incentive processing modules, and resultants of this processing called “actions”).
2. Reinterpret observed phenomena within the interpretive framework of that ontology (e.g., superficially similar actions are identical types that are the result of identical incentive processing modules acting in different contexts).
3. Claim that this reinterpretation is evidence of the sort of deep connection envisioned in one’s presumed ontology (e.g., the observation of cheating among teachers and sumo wrestlers is evidence of a deep similarity between them).

This explanatory strategy is an essential and pervasive trope of EMF work. Some of the other examples of the employment of this strategy are obvious, such as the Freakonomics chapter “How is the KKK Like a Group of Real Estate Agents?” Using the same ontological framework as in the teacher-sumo wrestler chapter, the authors interpret their observation of superficially similar use of private information by real estate agents and the Ku Klux Klan as evidence of a deep connection between them. But it is not only these straightforward comparisons of types of people that follow the trope delineated above. In fact, virtually all of EMF work employs this trope.
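The ontological framework presupposed by this trope can be made concrete with a toy sketch. The code below is mine and purely illustrative (nothing like it appears in the EMF literature, and all names and the decision rule are invented for the purpose), but it renders the “bundles of incentive processing modules” picture literally:

```python
# A toy rendering of the ontology sketched above: agents as bundles of
# analytically separable incentive-processing modules. All names and the
# decision rule are illustrative assumptions, not anything taken from
# Levitt and Dubner.
from dataclasses import dataclass, field


class CheatIncentiveModule:
    """The generic module presumed to be shared across all agents."""

    def process(self, stakes: float, detection_risk: float) -> bool:
        # The same rule fires no matter which agent hosts the module.
        return stakes > detection_risk


@dataclass
class Agent:
    name: str
    modules: list = field(default_factory=list)


teacher = Agent("Chicago teacher", [CheatIncentiveModule()])
wrestler = Agent("Sumo wrestler", [CheatIncentiveModule()])

# "Observing" freakcheating in both contexts is guaranteed to suggest a
# deep connection, because the identical module was written into both
# agents by assumption before any data were consulted.
for agent in (teacher, wrestler):
    print(agent.name, agent.modules[0].process(stakes=0.9, detection_risk=0.3))
```

The sketch makes the circularity visible: both agents behave identically because the same module was installed in both by construction, not because anything was discovered about them.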
Take, for example, Tyler Cowen’s (2008) Discover Your Inner Economist: Use incentives to fall in love, survive your next meeting, and motivate your dentist.[6] Although the book’s chapters deal with a wide range of contexts and questions, the central message of the book is that there is something essentially identical about one’s actions across all of these contexts—namely, that one is processing incentives and that, if one wants to do so properly, it helps to recognize that a generic processing module lies within the panoply of apparent diversity across all of these contexts. The central metaphor that Cowen chooses for the book—one’s “inner economist”—is especially telling in this regard. The inner economist is the generic incentive processing module operating within each of us. And it is important to note that this metaphor must carry its home world along with it—i.e., the same world that provides the backdrop of Freakonomics. The idea of a generic incentive processing module within each of us is only intelligible within such a world.

[6] Numerous other examples could have been chosen as well—e.g., Harford (2009), Landsburg (1993), or Friedman (1997). Each of these employs the same basic logic as that discussed here.

This explanatory trope has been a smashing success for EMF authors, judging not only by book sales but also by the extent to which EMF thinking has become a “meme” in popular culture and even an academic teaching tool.[7] But there are significant problems with the EMF trope, and to the extent that EMF authors want their work to be understood as semi-serious economics it is important to make these problems clear. DiNardo (2007) and Rubinstein (2006) have provided ample evidence of some of the methodological shortcomings of EMF work. DiNardo (2007), in particular, casts enough doubt on the claims of Freakonomics to place the burden of proof squarely on the shoulders of anyone who would claim that those claims should be understood as scientific results.

[7] Harper Collins, the publisher of Freakonomics and Superfreakonomics (Levitt and Dubner 2009), offers a range of companion teaching materials and study guides.

But there is an additional reason to question the legitimacy of EMF work that would remain even if the authors addressed the methodological sloppiness identified by DiNardo; namely, that it is inherently question-begging. As discussed above, the observations that EMF authors present as evidence of deep connections between disparate phenomena could actually only count as such evidence if we already presume the deep connections to exist. As such, the evidence they adduce is incapable of answering the larger question of whether or not their claims are valid. For example, with respect to the teacher-sumo wrestler chapter in Freakonomics, the authors’
empirical analyses that were meant to test whether or not the teachers and wrestlers were cheating could only be considered evidence of a deep connection between the two groups if we had already accepted the ontological presumption that turns these instances of cheating into freakcheating. This is why what the authors are actually doing is not discovering a previously hidden connection, but rather “discovering” their initial assumptions and reporting them as such a discovery. And this is a general feature of EMF work: the surprising conclusions the authors reach are already contained in their assumptions, and the evidence they adduce to support the conclusions is only evidence of hidden connections if the assumptions are correct.

In this sense, EMF work bears a disturbingly strong resemblance to Bedevere’s witch test. Bedevere claims that his proposed test—balancing the woman the villagers have brought to him against a duck—will reveal whether or not the woman is a witch. But seeing the results of the test as evidence of witchness (or lack thereof) only makes sense against the background of a set of ontological assumptions that include, inter alia, the existence of witches for whom floating in water is a constitutive property. We moderns can see that the villagers’ focus on the balancing scales is misplaced. The ultimate answer to their question lies elsewhere—i.e., in an exploration of the question of the existence of witches—and the testing methodology they have chosen is not equipped to produce the kind of information they need. They may come up with the right answer, but it will be for the wrong reason.

It is this latter problem of EMF analysis that has the most troubling implications. Unlike the problems identified by DiNardo and Rubinstein—the elucidation of which serves to separate contemporary economics from EMF—the problem I have raised is actually something that EMF shares with contemporary economics. Specifically, the philosophical justification for employing the EMF trope as a means of (ostensibly) uncovering hidden patterns in the world is one that was originally also a part of the philosophical foundation of contemporary economics. This justification, “The Principle of the Substitution of Similars,” was the brainchild of William Stanley Jevons, and it is more than a mere historical curiosity. As I will argue in section III, it is an erroneous principle that not only allows EMF work to masquerade as semi-serious economics, but also provides cover for certain well-accepted methodological strategies within academic economics that actually possess the same essential flaw as EMF work. Before turning to this argument, however, it is necessary to give a brief review of Jevons’ principle.

II. The Principle of the Substitution of Similars

William Stanley Jevons is known to most contemporary economists as a member of the triumvirate of the marginalist revolution (along with Léon Walras and Carl Menger), but by his own reckoning his greatest contribution to knowledge came in the field of logic. That contribution, specifically, was the Principle of the Substitution of Similars (PSS), and it provided the analytic core of all of Jevons’ scientific endeavors, including his seminal Theory of Political Economy (Jevons 1871). Significantly for the purposes of this paper, its imprint is still discernible today as an important part of the logic underpinning both EMF work and certain well-established methodological strategies in mainstream academic economics.

The PSS arose out of Jevons’ engagement with mathematics and logic—a passion that predated by many years his interest in political economy. Although his university training centered around natural science, it was Jevons’ studies in mathematics and logic with Augustus De Morgan that had the most profound impact on his scientific practice. In the late 1850s, Jevons began producing his first independent academic work—studies of cloud formation.
In addition to yielding his first publications, this work also allowed Jevons to reflect concretely on the nature of scientific discovery and its relationship to methodology. He became convinced that the key to all scientific discovery was the recognition of similarities in apparently disparate objects—specifically, that when a heretofore obscure phenomenon is recognized as similar to a well-understood phenomenon, we may project what is known of the latter onto the former. Jevons felt that there was something very deep about this idea, and that although in some sense it was a commonplace, its implications had not yet been fully explored. Further, he had a sense of how such exploration might be possible, and it hinged on an innovation in logic. On New Year’s Eve 1862 Jevons wrote in his journal, “my logical speculations give me most confidence. I cannot disbelieve, yet I can hardly believe that in the principle of sameness I have found that which will reduce the whole theory of reasoning to one consistent lucid process” (Black and Könekamp 1972, p. 186).

Jevons’ key logical innovation was to tighten the relationship between subject and predicate, specifically by replacing the standardly used copula—some form of the verb “to be”—with the mathematical symbol “=”. For Jevons, the standard copula was too ambiguous (Jevons 1958 [1887], p. 16). It signaled only that the subject was included in a class of things denoted by the predicate. And this relationship could imply many different things, which made it a complicated matter to determine what precisely could be deduced from a series of statements expressed with the standard copula. Jevons believed (building on the innovations of George Bentham and George Boole) that by specifying precisely the part of the predicate to which the subject exactly agreed, one could express statements like “A is B” as “A = [some subset or aspect of B]”.[8] The great advantage of this was that one could then utilize the rules of algebra to work out the implications of statements. All of the complicated rules of syllogism in Aristotelian logic could thereby be done away with (Jevons 1869, p. 25).

[8] The specific innovation of Bentham and Boole referenced here is the “quantification of the predicate.” See Jevons 1869, p. 4.

As a practical matter, what Jevons was aiming for was a way to more accurately express his view regarding the power of deep similarity. Utilizing the “=” symbol was a way of formalizing Jevons’ belief that if one could establish a sufficient degree of similarity between two objects, then one ought to be able to conclude that everything that was true of the one would be true of the other—or, as he put it in his definition of the PSS, that a “capacity of mutual replacement exist[s] in any two objects which are like or equivalent to a sufficient degree” (Jevons 1958 [1887], p. 17).

An example from an early exposition of the PSS demonstrates precisely how Jevons imagined the PSS would work in practice. Taking Nassau Senior’s definition of wealth as the subject to be explored, he writes:

Sometimes we may have two definitions of the same term, and we may then equate these to each other. Thus, according to Mr. Senior,
(1) Wealth = whatever has exchangeable value.
(2) Wealth = whatever is useful, transferable and limited in supply.
We can employ either of these to make a substitution in the other, obtaining the equation,
Whatever has exchangeable value = whatever is useful, transferable, and limited in supply. (Jevons 1869, pp. 25-6)

The concluding statement follows from the preceding ones in precisely the same manner and for precisely the same reasons that “y = 3” would follow from the statements “x = y” and “x = 3”.
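Stated schematically (the modern notation is my rendering, not Jevons’ own), the PSS is simply the algebraic rule of substitution of equals exported beyond mathematics: from an equation and any statement in which one side of it occurs, infer the statement with the other side substituted:

\[
A = B,\quad \Phi(A)\ \vdash\ \Phi(B)
\]

With A standing for “wealth,” B for “whatever has exchangeable value,” and Φ(A) for Senior’s second definition, the substitution mechanically yields Jevons’ concluding equation. The entire weight of the inference therefore rests on whether the premise A = B can legitimately be read as a mathematical equality, which is precisely the problem taken up next.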
The primary problem with the PSS, as many of Jevons’ contemporaries noted, is that transferring the notion of mathematical similarity to a non-mathematical context is no simple matter. The fact that it is possible to write a statement in which two non-mathematical entities are joined by the “=” symbol does not entail that such a relationship is possible. At the very least, the intelligibility of such an expression would require the espousal of a radical social ontology—i.e., that either the social world is underlain with mathematical structure or that any non-mathematically structured elements are isolable from the mathematically structured elements. This would not be necessary if the “=” were understood more loosely, for instance by being interpreted as equivalent to ordinary language expressions like “is similar to” or “is the same as.” In this case, there would be no problem in interpreting sentences like the ones from Jevons’ PSS example above. But Jevons intended to preserve the mathematical sense of the “=” symbol—indeed, this preservation of the mathematical sense was absolutely crucial to his goal of importing the algebraic operation of substitution into logic.[9]

[9] This is a somewhat contentious claim, as Jevons does seem to argue in The Principles of Science for a broader understanding of the “=” symbol, noting that “the meaning of the sign has...been gradually extended beyond that of common [i.e. mathematical] equality” (Jevons 1958 [1887], p. 15). However, he goes on to state that there is “some real analogy between its diverse meanings” (Jevons 1958 [1887], p. 16), and his subsequent discussion is most consistent with the view that it is the non-mathematical uses that conform with the mathematical rather than vice-versa. In any event, the mathematical usage is the most restrictive of all of the senses of equality he discusses, and so in order for that sense to be included among the many senses of equality expressed by “=” it must be the case that if there is something extra-mathematical about his meaning of “=” it could not obviate any of the mathematical meaning of the symbol.

Given the importance of providing some kind of philosophical grounding for the possibility of mathematical equivalence in non-mathematical settings, it is surprising that Jevons devoted nearly no space to the issue in The Principles of Science, focusing instead on the construction of a system to work out the logical implications of such similarity assuming that it was possible. To many of Jevons’ contemporaries this was simply question-begging, as they pointed out in reviews of The Principles of Science and in letters to Jevons. The eminent English scientist John Herschel, for example, expressed this objection trenchantly in an 1869 letter to Jevons:

And then, after all, the difficulty of reasoning correctly lies not in the mechanical application of logical formulae...but in the application of reason and observation to decide what things are similar: so similar as to admit of substitution for each other in the argument at hand; which is not a province of formal or Aristotelian logic, however largely supplemented by Dr Boole, Dr Thomson or yourself. (cited in Maas 2005, p. 148)[10]

[10] The Boole and Thomson referred to in the quotation are the English logician and mathematician George Boole (referred to above in connection with the quantification of the predicate) and the Scottish mathematician and physicist William Thomson, Lord Kelvin.

And Herschel was not alone in these concerns—John Stuart Mill and George Boole were also critical of this aspect of Jevons’ philosophy of science.[11] And even Jevons’ teacher and mentor Augustus De Morgan remained unconvinced, declining to adopt the replacement of the standard copula with the “=” symbol despite the fact that he, himself, was an early innovator in working out the connections between mathematics and logic.[12]

[11] Mill expressed his objections to Jevons’ logic directly in a letter to John Elliott Cairnes in December 1871, criticizing Jevons for having “a mania for encumbering questions with useless complications, and with a notation implying the existence of greater precision in the data than the questions admit of” (Mill 1963, XVII, pp. 1862-3; cited in Maas 2005, p. 97). For Boole’s critique, see Grattan-Guinness (1991).
[12] For De Morgan’s review of Jevons’ logic, see Sánchez Valencia (2001).

The reason for Jevons’ relative neglect of this issue was not that he disagreed with these critics that it was important, but rather that he thought it could be addressed quite straightforwardly. Specifically, Jevons held that all phenomena possess logical structure—which, because of Jevons’ conflation of mathematics and logic, was equivalent to the position that all phenomena possess mathematical structure. And significantly for the purposes of this paper, this position was not an empirical finding or a position which Jevons supported with argument, but rather simply an article of faith. We can see this by looking closely at Jevons’ exposition of the position. He begins by proffering what he considers to be the fundamental laws of logic—i.e., “the laws which express the very nature and conditions of the discriminating and identifying powers of the mind”:

1. The Law of Identity. Whatever is, is.
2. The Law of Contradiction. A thing cannot both be and not be.
3. The Law of Duality. A thing must either be or not be. (Jevons 1958 [1887], p. 5)

Next, he makes the radical inference that these laws are not merely laws of thought, but further are an expression of the structure of all existence. Significantly, he does so not through rigorous argumentation, but via a dialog with an imagined interlocutor:

Are not the Laws of Identity and Difference the prior conditions of all consciousness and all existence? Must they not hold true, alike of things material and immaterial? and [sic] if so can we say that they are only subjectively true or objectively true? I am inclined, in short, to regard them as true both ‘in the nature of thought and things,’ ... and I hold that they belong to the common basis of all existence. (Jevons 1958 [1887], p. 8)

And this helps us to make sense of the following general description of phenomena that appears several pages earlier:

In the material framework of this world, substances and forces present themselves in definite and stable combinations.
...The constituents of the globe, indeed, appear in almost endless combinations; but each combination bears its fixed character, and when resolved is found to be the compound of definite substances. (Jevons 1958 [1887], p. 2)

From the previous quotation, we can see that the “framework” of which Jevons writes above encompasses all things, material and immaterial. What Jevons is expressing in these passages is nothing less than a thoroughgoing rationalist ontology. Jevons does not provide an account of why he holds this ontological position, but the fact that he does hold it takes us a long way in understanding why he would have considered the PSS to be well founded despite his critics’ misgivings. Herschel’s concern that Jevons had not paid sufficient attention to the question of how one determines what things are similar is addressed by Jevons’ ontological assumptions. For Jevons, all things are deeply similar structurally; specifically, they are modular complexes of definite substances. And this is precisely the kind of similarity we need to get the principle behind the PSS off the ground. Based solely on this, we need not worry whether statements like “Wealth = whatever is useful, transferable and limited in supply” are intelligible, but rather only whether or not—given that they are of an appropriate form—they are correct.

We are now prepared to see the deep parallels between the PSS and the standard trope of EMF work discussed above. As with the EMF trope, the PSS involves beginning with an unconventional ontological position and reinterpreting observed phenomena within the interpretive framework of that ontology, and, once one successfully “tests” the description against a relevant set of data, claiming that the reinterpretation is evidence of the sort of deep connection envisioned by the presumed ontology. But just as with the EMF trope, the test that ostensibly vindicates the proposition of a deep connection between the phenomena under study will be question-begging—specifically, it will be a test conducted entirely within the presumed ontology, will never touch the question of the plausibility of the ontology, and will end only in “discovering” one’s initial assumptions.

We can see this by turning to another example—Jevons’ application of his logic to the following statement made by Augustus De Morgan: “He must have been rich, and if not absolutely mad was weakness itself, subjected either to bad advice or to most unfavourable circumstances.”[13] Jevons claims that the statement can be equivalently understood as follows (note that the symbol “.|.” is, roughly, Jevons’ representation of “and/or”):

If we assign the letters of the alphabet in succession, thus,
A = he
B = rich
C = absolutely mad
D = weakness itself
E = subjected to bad advice
F = subjected to most unfavorable circumstances,
the proposition will take the form
A = AB{C .|. D (E .|. F)} (Jevons 1958 [1887], p. 76)

[13] The statement is taken from De Morgan (1858).
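In modern Boolean notation (my translation, reading Jevons’ “.|.” as inclusive disjunction), the proposition reads:

\[
A = A \land B \land \bigl(C \lor (D \land (E \lor F))\bigr)
\]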
This is a decomposition of the subject of De Morgan’s sentence into a modular complex of attributes. To the extent that such attributes are also observed in other subjects, the latter’s decomposition would include the same letters, which, by Jevons’ account, would not only have to mean precisely the same thing but would also have to be associated with the subjects in precisely the same way. Whatever else might be true of disparate subjects containing some identical attribute, they are all deeply connected by being associated with that identical attribute in an identical manner. This is precisely what Levitt and Dubner implicitly claim by interpreting the cheating of teachers and Sumo wrestlers as freakcheating generated by an identical incentive processing module, and it is logically unsound for precisely the same reason. The “discovery” of that deep connection was not established through observation, but rather was something that was embedded in the analysis by assumption.

III. The PSS and Contemporary Academic Economics

Unfortunately, the logic of the PSS underpins not only EMF analysis but also some ostensibly legitimate analytic strategies that are pervasive in contemporary academic economics. In brief, the principle that the only valid test of a hypothesis is its empirical success (and, therefore, that the unrealisticness per se of the hypothesis is irrelevant) allows EMF-type methods to masquerade as legitimate methods by obscuring their illegitimacy through inadequate testing procedures. In this section, I will argue for this claim and provide as support an example of academic economic writing on the housing market in the lead-up to the recent financial crisis. I will proceed by first discussing (i) the relation between the PSS and contemporary economic methodology and (ii) the safety net that is supposed to prevent the PSS from causing EMF- and Bedevere-type problems. I will then argue that the safety net is porous in that it is insensitive to certain kinds of important problems for the same reason that the standard EMF trope reviewed above is logically flawed.

The philosophical framework of the PSS is present in contemporary economics on two levels—first, with respect to ontological presuppositions, and second with respect to the manner in which these presuppositions are ostensibly rendered harmless. To what extent contemporary economics is committed to a rationalist ontology is a matter of controversy.[14] The discipline’s methodological and epistemological commitment to mathematical modeling, however, is clear. Economic analysis of social phenomena today means creating a mathematical representation (implicitly or explicitly, though usually explicitly) of the phenomena as a means of proposing and testing possible explanations of the nature and dynamics of those phenomena. And economic knowledge is the product of proper application of this methodology. As such, a milder form of the Jevonian ontological position is implicitly espoused by contemporary economics, at least provisionally—namely, that mathematical representations of (any) social phenomena are a potentially useful explanatory mechanism for those phenomena and, further, that explaining social phenomena only and always through mathematical representation does not limit the potential scope of one’s social explorations. A rationalist ontology is a sufficient grounding for such a belief, though not a necessary one. Still, although a practicing economist can be ontologically agnostic to some extent, that agnosticism must be bounded to exclude the position that the phenomena of the social world are definitely not rationally structured. Such a position would be directly at odds with the standard practice of carrying only mathematical tools in one’s toolkit.
[14] I have argued elsewhere (Spiegler 2005) that contemporary economic methodology is incoherent in the absence of such a commitment.

What is supposed to keep this ontological commitment from being a rationalist blemish on the economist’s pure empiricism is the principle that the only relevant thing about a hypothesis is its agreement with empirical observation (and not, therefore, its origin). Milton Friedman’s 1953 essay provided the canonical statement within economics of this principle, but Jevons had anticipated Friedman’s position in 1874, stating quite clearly in the first edition of The Principles of Science that “[a]greement with fact is the one sole and sufficient test of a true hypothesis” (Jevons 1874, p. 138, italics in original).[15] This principle is meant to provide protection against the kinds of problems that the ontological elements of PSS thinking could otherwise cause. Unlike in much EMF work, in proper science (the thought goes) one cannot view deductions made from within one’s ontological presumptions as results in themselves. Rather, they must be treated as hypotheses to be rigorously tested, and to the extent they agree with empirical observation the presumed ontology can be treated merely as a creative source of theory. In effect, the ontological presumption is rendered invisible.

[15] The position has a much longer history that predates economics. Descartes was, famously, comfortable with unrealistic assumptions, holding the predictive and postdictive power of hypotheses as the only legitimate measure of their adequacy. Frustration with this position was the source of Newton’s quip “hypotheses non fingo.”

The central problem with this position is succinctly articulated by the two prongs of the Quine-Duhem thesis.[16] First, since one can never test hypotheses in isolation, any hypothesis test is actually a test of the hypothesis along with all of its framing assumptions. Second, the test itself is never a comparison of the hypothesis with brute facts of nature but, rather, a comparison of the former with empirical observations that have already been parsed into the categories of the theory. Put another way, the test must occur entirely within the presumed ontology and, as such, is merely a test of whether an internally consistent rational reconstruction of empirical observations is possible. What is left aside is the question of whether the presumed ontology itself is plausible, and this is the door through which EMF-type problems can enter into contemporary academic economic practice.

[16] See Duhem (1962, esp. p. 185) and Quine (1963, esp. ch. II, §§5-6).

To explain precisely how this happens, I need to be a bit more precise about the current model assessment standards of contemporary economics. The basic tenets of these can be distilled into three principles:

1. The Empirical Consequences Principle (ECP): The only real kind of problem that an economic model can have is one that has empirical consequences—e.g., it makes bad predictions.
2. The Econometrics Sufficiency Principle (ESP): All modeling problems with empirical consequences—i.e., all real modeling problems—can in principle be detected econometrically.
3. The Friedman Principle (FP): Economists need recognize no constraints in their model creation besides the ECP and ESP.
This statement of the model assessment standards is just an expansion of the Jevons-Friedman standard cited above to include the type of testing that is considered necessary and sufficient within contemporary economics—namely, econometric methods. What is left out of these standards is a recognition of the role of ontological background assumptions not merely as creative sources of hypotheses but also as the scaffolding within which one’s data is created and one’s empirical testing occurs. Ignoring this role puts one in danger of mistakenly endorsing explanations that take superficial similarities to be evidence of deeper, more essential connections. The problem is that, based on the model assessment standards above, we have no way of verifying whether such deep connections actually exist, and if they do, whether the particular deep connection we’ve projected onto the data is the right one. In his essay “Is a Science of Comparative Politics Possible?” Alasdair MacIntyre put the point cogently with respect to positive political science work:

[I]f we identify behavior except in terms of the intentions and therefore of the beliefs of the agents we shall risk describing what they are doing as what we would be doing if we went through that series of movements or something like it rather than what they are actually doing. Nor do we avoid this difficulty merely by finding some description of the behavior in question which both the agents themselves and the political scientist would accept. (MacIntyre 1978, p. 264)

As such, using the ECP-ESP-FP complex as one’s assessment standards casts a shadow over one’s conclusions, regardless of whether or not they have met those standards. By extension, if these are used as the assessment standards for an entire discipline, then that discipline will have an a priori credibility problem with respect to any of its results.[17]

[17] This is not to say that no results coming from such a science could be legitimate and/or correct. Rather, it is to say that since the discipline’s standards are blind to the difference between legitimate and (certain types of) illegitimate results, the discipline’s stamp of approval alone cannot inspire much confidence.

This is not merely a theoretical problem. If one’s model assessment standards leave one blind to important misunderstandings, then empirical consequences that are missed by one’s empirical testing procedures may end up making their first appearance as live problems in actual reality that one has failed to anticipate. This could manifest itself, for example, as the realization of a particular phenomenon that is well outside of one’s predictions, or as the failure of a policy designed on the basis of one’s models. These problems can occur because of modeling problems that are within the province of the ECP-ESP-FP complex, but can just as easily occur for reasons outside it.

The recent global financial crisis and its relationship to the U.S. housing market provides one concrete example of the types of things that can be missed by the ECP-ESP-FP complex and the consequences that can result. As we now know, the central cause of the financial crisis was the concentration of investment in the U.S. housing market, particularly via derivative securities (e.g., credit default swaps and various forms of collateralized debt obligations and mortgage backed securities). The massive inflow of investment during the late 1990s and throughout most
of the 2000s inflated a bubble in U.S. (and, eventually, non-U.S.) housing and related assets that unraveled and burst when, in 2007, housing prices began to level off and mortgage defaults began to rise. What has also become clear is that this was no simple speculative bubble. Rather, it was a bubble created and sustained in large part by a conscious commercio-regulatory strategy pursued by private financial institutions, legislators, the Chairman of the Federal Reserve, and the Treasury department. Although the intent certainly was not to cause a financial crisis, there was a clear intent to remove barriers to highly leveraged investment strategies utilizing lightly regulated and unregulated derivatives.[18] One result of this was that the U.S. housing market in the late 1990s and 2000s did not conform to the standard economic picture of a housing market—i.e., it was not simply a market for an asset delivering a certain type of service with attendant financing issues analyzable via the standard model of financial assets. Rather, it was a part of a commercio-regulatory gambit and, significantly, this characteristic was constitutive of the late 1990s-2000s U.S. housing market in the sense that failing to understand this fact meant failing to understand the phenomenon.

[18] This story has been covered in many venues. See Johnson and Kwak (2010) for a good overview with historical perspective.

During the 2000s, the unprecedented rise in U.S. housing prices was apparent to economists concerned with such issues, and not surprisingly many papers were written on the subject. What is surprising is the almost complete lack of connection in the academic economic literature between the dynamics of the housing market, the activity in the related securities markets, and the regulatory environment that was fueling the interplay between the two. This relative silence on the matter from academic economists extends well into 2008, when the crisis had already begun to unfold and the connections between the housing market, the derivatives market, and deregulation were being minutely covered in the financial (and popular) press.

It is instructive to look at some representative economic work from the early to mid 2000s to see just what economists were focusing on in their analyses of the housing market, and what they were not. The paper I will discuss here, briefly, is Karl Case and Robert Shiller’s 2003 paper “Is There a Bubble in the Housing Market?” The central problem with this paper, as I discuss below, is its ontological presupposition about the nature and dynamics of the U.S. housing market in the late 1990s-2000s—specifically, the presumption that the concepts and dynamics of the standard model of housing assets were unproblematically applicable and that the only relevant test of the authors’ hypotheses was comparison against data gathered according to that ontological presumption.

Case and Shiller (2003) express their ontological presupposition through their definition of “housing bubble” and their test for its existence. The authors introduce their paper by noting that “the popular press is full of speculation that the United States, as well as other countries, is in a ‘housing bubble’ that is about to burst.” They indicate that the purpose of their paper is to get to the bottom of this.
“But how do we know if the housing market is in a bubble?” they ask, implying that the reports in the popular press are mere opinion and that the methodology that will be brought to bear in their paper will be able to give a more definitive answer (Case and Shiller 2003, p. 299). To frame their analysis, they indicate what, precisely, they will be calling a “bubble”:

[I]n its widespread use the term [“bubble”] refers to a situation in which excessive public expectations of future price increases cause prices to be temporarily elevated. ... But the mere fact of rapid price increases is not in itself conclusive evidence of a bubble. The basic questions that still must be answered are whether expectations of large future price increases are sustaining the market, whether these expectations are salient enough to generate anxieties among potential homebuyers, and whether there is sufficient confidence in such expectations to motivate actions. (Case and Shiller 2003, pp. 299-300)

In order to test for the presence of a bubble so defined, the authors consider two types of evidence. First, in order to test whether the price rise is actually caused by “fundamentals,” the authors “analyze U.S. state-level data on home prices and the ‘fundamentals,’ including income, over a period of seventy-one quarters from 1985 to 2002.” Second, to test for the presence of the subjective factors in their definition of a bubble, the authors “present the results of a new questionnaire survey conducted in 2003 of people who bought homes in 2002 in four metropolitan areas: Los Angeles, San Francisco, Boston and Milwaukee” (Case and Shiller 2003, p. 300).

With respect to “the fundamentals,” the authors conduct various regression analyses of housing prices against several measures of economic fundamentals, including income, unemployment, mortgage rates and housing starts. They conclude that “income alone explains patterns of home price changes in all but eight states” and that they “cannot reject the hypothesis that a bubble exists in these [eight] states” on the basis of these regressions (Case and Shiller 2003, p. 312).
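The shape of such a “fundamentals” test can be sketched in a few lines of code. The data below are synthetic and the specification is illustrative only (it is not Case and Shiller’s actual model or dataset), but it shows the kind of inference such regressions license:

```python
# A minimal sketch of a "fundamentals" regression: do home prices track
# income? Synthetic data; the specification is an illustrative
# assumption, not Case and Shiller's actual one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
quarters = 71  # 1985-2002, as in the paper

income = np.cumsum(rng.normal(0.5, 1.0, quarters))    # trending income series
price = 2.0 * income + rng.normal(0.0, 1.5, quarters)  # prices tied to income

X = sm.add_constant(income)
fit = sm.OLS(price, X).fit()
print(fit.summary())

# If income "explains" prices, the fundamentals story is not rejected.
# Note what the test can and cannot say: it asks only whether prices are
# expressible as a function of the chosen fundamentals within the
# presumed model of the market, not whether that model is the right
# description of the market itself.
```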
On the basis of their surveys, they conclude that homebuyers are for the most part unsophisticated agents who “are highly involved with the market at the time of the home purchase and may overreact at times to price changes and to simple stories, resulting in substantial momentum in housing prices” (Case and Shiller 2003, p. 337). On the basis of this evidence, the authors’ conclusion is that there is some, limited evidence of a housing bubble in some states, but that activity in the housing market is unlikely to have a significant negative effect on the economy. “[J]udging from the historical record,” they write, “a nationwide drop in real housing prices is unlikely, and the drops in different cities are not likely to be synchronous: some will probably not occur for a number of years. Such a lack of synchrony would blunt the impact on the aggregate economy of the bursting housing bubble” (Case and Shiller 2003, p. 342).

We now know that the authors were wrong in their relatively mild assessment of the potential problems brewing in the housing market, but the point is not that they were wrong but why they were wrong. In considering the question of whether there was a problem in the housing market with respect to asset valuation, the authors envisioned the market as a standard asset market and tested their model against data that was only capable of giving them evidence about the internal consistency of their rational reconstruction of the situation. The results of their empirical analysis were expressible within their presumed ontology of the housing market, but this was not enough to indicate whether their results were a description of the actual housing market or just a logically possible but inaccurate version.

What Case and Shiller (2003), along with most other economic analyses of the housing market and its attendant financial markets during this time, failed to grasp was no small matter—it was the heart of what led to the worst global economic crisis in eighty years. But significantly, the crisis did not catch everyone by surprise. Many of those with closer acquaintance with the day-to-day workings of the mortgage markets understood just what type of problem we were facing long before economists did, as Case and Shiller’s reference to concerns about the housing bubble in the financial and popular press implies. There was important and highly relevant information to be garnered from within certain communities that were well aware of the peculiar ontology of the late 1990s-2000s U.S. housing market, including those of mortgage originators, investment banking employees involved in creating and marketing structured investment products, employees of credit rating agencies, and legislators (and their staffs) on the Senate and House committees related to economics, finance and housing, to name just a few. It was the information circulating in these communities that was essential for an accurate assessment of academic economists’ ontological presumptions about the housing market. But because of those ontological presumptions, this information was not sought out—the need to do so was simply not apparent to (methodologically mainstream)[19] economists in general. As has become obvious by this time, the true nature of the problems in the housing market had massive empirical consequences, notwithstanding the ECP-ESP-FP assessment standards’ seal of approval on methods that rendered such problems invisible.

[19] I include behavioral and experimental economists in this group.

It is important to recognize this problem and to hold the discipline of economics to account. Specifically, it is important to use the financial crisis as an object lesson in the types of assessment standards that are needed within academic economics to adequately deal with the role of ontological presumptions in our methodology. Doing so will require methods that are not currently in the standard toolkit of economics. Specifically, because what is needed is a set of standards for assessing the plausibility of one’s ontological presumptions in the study of a particular set of social phenomena, the assessment standards must include the ability to entertain alternative ontologies and to understand how to read the ontologies of social contexts from the perspective of the individuals that constitute those contexts. Methods for doing this are alien to economics, but commonplace in anthropology and some strains of sociology.

And fortunately, anthropologists and sociologists are already doing work in some areas that are the typical province of economists—for example, the work of Donald MacKenzie (2006; 2009), Caitlin Zaloom (2006), Vincent Lepinay (2011) and Karen Ho (2009) in the anthropology of finance, Annelise Riles’ (2011) work in the anthropology of legal communities supporting financial regulation, and Douglas Holmes’ (Forthcoming) work in the anthropology of central banking. As it stands, such work is a valuable resource for any economist writing about finance, central banking or any topic that touches on these areas. But, not surprisingly, the focus of these anthropologists is not always on precisely those issues with which economists would be most concerned. As such, in order to harness these methods in a manner most useful to economists, it
And fortunately, anthropologists and sociologists are already doing work in some areas that are the typical province of economists—for example, Donald MacKenzie (2006; 2009), Caitlin Zaloom (2006), Vincent Lepinay (2011) and Karen Ho’s (2009) work in the anthropology of finance, Annelise Riles’ (2011) work in the anthropology of legal communities supporting financial regulation, and Douglas Holmes’ (Forthcoming) work in the anthropology of central banking. As it stands, such work is a valuable resource for any economist writing about finance, central banking or any topic that touches on these areas. But, not surprisingly, the focus of these anthropologists is not always on precisely those issues with which economists would be most concerned. As such, in order to harness these methods in a manner most useful to economists, it 19 I include behavioral and experimental economists in this group. DRAFT: PLEASE DO NOT CITE WITHOUT AUTHOR’S PERMISSION 28 will be necessary for economists to bring them within the fold of economics and tailor them for the discipline’s specific needs. In this respect, I believe that there is a strong parallel between these anthropological methods and work in the field of statistics in the early 20th century. In the 1930s, recognizing the need to develop more rigorous standards for empirical testing, economists established the field of econometrics to tailor statistical methods to the needs and goals of economics. Today, we need such a field to develop standards for assessing the plausibility of ontological assumptions. While doing so would certainly not be a guarantee that economics as a discipline would never again fail to grasp important economic trends, it could (if given an institutional presence within markets and incorporated into the assessment standards of academic journals) at least guarantee that economists would feel compelled ask and answer the types of questions that would have made the recent epic failure of economics to see the financial crisis coming less likely.